Most smartphone users prefer to read news on social media rather than directly on news websites, which publish news together with an authoritative source. The question is how to authenticate news and articles circulated on social media such as WhatsApp groups, Facebook pages, Twitter, and other microblogging and social networking sites. Believing rumours that masquerade as news is harmful to society. The need of the hour, especially in developing countries such as India, is to stop such rumours and focus on correct, authenticated news articles. This paper presents a model and methodology for fake news detection. Using machine learning and natural language processing, news items are aggregated and then classified as real or fake with a Support Vector Machine. The results of the proposed model are compared with those of existing models; the proposed model performs well, achieving an accuracy of 93.6%.
Introduction
Fake news is a growing problem in the digital age: anyone can publish content online, leading to the rapid spread of misinformation, especially on platforms such as Facebook, Instagram, Twitter, and WhatsApp. Such false information can mislead people, create panic, and even lead to serious consequences such as violence and mob lynching.
To address this issue, the project focuses on building a fake news detection system using artificial intelligence. The main objective is to classify news as “real” or “fake” and prevent the spread of harmful misinformation. If a news item is identified as fake, the system suggests verified and relevant articles to the user.
The study reviews earlier approaches, starting with traditional machine learning models such as Support Vector Machines (SVM), Naïve Bayes, and logistic regression, which rely on text-based features to detect fake news. More advanced methods include deep learning models such as RNNs and CNNs, as well as hybrid systems that also consider user behaviour and how news spreads over time. Datasets such as LIAR and fact-checking platforms such as PolitiFact have been used to improve detection accuracy.
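As a minimal illustration of the text-feature classifiers the review mentions, the following pure-Python sketch implements a bag-of-words Naïve Bayes classifier with Laplace smoothing. The toy headlines, labels, and function names here are assumptions chosen for illustration only; they are not the paper's actual features, dataset, or implementation.

```python
import math
from collections import Counter, defaultdict

def train(samples):
    """samples: list of (text, label) pairs; returns a simple count-based model."""
    label_counts = Counter()
    word_counts = defaultdict(Counter)
    vocab = set()
    for text, label in samples:
        label_counts[label] += 1
        for word in text.lower().split():
            word_counts[label][word] += 1
            vocab.add(word)
    return label_counts, word_counts, vocab

def predict(model, text):
    """Return the most probable label using log-probabilities with Laplace smoothing."""
    label_counts, word_counts, vocab = model
    total = sum(label_counts.values())
    best_label, best_score = None, float("-inf")
    for label, count in label_counts.items():
        score = math.log(count / total)                      # class prior
        denom = sum(word_counts[label].values()) + len(vocab)
        for word in text.lower().split():
            score += math.log((word_counts[label][word] + 1) / denom)  # smoothed likelihood
        if score > best_score:
            best_label, best_score = label, score
    return best_label

# Toy training data (hypothetical headlines, not from the paper's dataset)
model = train([
    ("government confirms new policy in official statement", "real"),
    ("official report verified by independent agency", "real"),
    ("shocking secret cure doctors hide miracle", "fake"),
    ("you won't believe this shocking miracle trick", "fake"),
])
print(predict(model, "shocking miracle cure revealed"))  # prints "fake"
```

In practice the surveyed systems would use richer features (TF-IDF weights, n-grams) and far larger corpora, but the smoothed log-probability scoring shown here is the core of the Naïve Bayes baseline.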
The proposed system architecture combines text analysis, contextual information, and validation processes. It undergoes thorough testing (functional, integration, and performance) before deployment on a secure server, ensuring reliability and scalability.
Overall, the system aims to reduce misinformation, improve public awareness, and protect society by accurately identifying and limiting the spread of fake news.
Conclusion
In today’s digital age, the spread of misinformation through fake news has become one of the most critical challenges for online platforms and society as a whole. This research focused on developing and analyzing fake news detection techniques using Natural Language Processing (NLP) and Machine Learning (ML) models. The study demonstrated that traditional machine learning algorithms such as Support Vector Machine (SVM) and Random Forest are limited in capturing the deeper linguistic patterns and contextual meanings present in complex sentences, whereas deep learning approaches such as LSTM and transformer-based models such as BERT show significant improvements in detection accuracy. The BERT model, in particular, achieved the best results due to its ability to capture contextual relationships between words through its bidirectional attention mechanism.

The experimental findings confirm that integrating NLP with advanced deep learning techniques provides an effective solution for identifying and filtering fake news across multiple domains and sources. Such systems can benefit media organizations, fact-checking agencies, and social media platforms by automatically detecting misleading or fabricated content before it spreads widely. Moreover, the model can be integrated into real-time web applications or browser extensions to help users verify the authenticity of news articles instantly.

Several directions remain for future work. One involves cross-domain transfer learning, allowing a model trained on one type of dataset (e.g., political news) to perform effectively on others (e.g., health or finance). Another promising direction is combining textual and visual analysis, in which the images and videos accompanying news articles are also verified.
Additionally, incorporating explainable AI (XAI) methods will make fake news detection systems more transparent and trustworthy, helping users understand why a particular article was classified as fake or real. Overall, this study highlights that a well-designed NLP and deep learning–based model can play a vital role in reducing misinformation, promoting digital media literacy, and building a more informed and trustworthy online environment.
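The bidirectional attention mechanism credited above for BERT's performance can be illustrated, in greatly simplified form, by scaled dot-product attention. The pure-Python sketch below uses toy 2-D vectors and assumed function names; it is not BERT's implementation, only the core weighting idea in which each query attends more strongly to the keys it is most similar to.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(queries, keys, values):
    """Scaled dot-product attention over lists of equal-length vectors."""
    d = len(keys[0])
    outputs = []
    for q in queries:
        # Similarity of the query to every key, scaled by sqrt(dimension)
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in keys]
        weights = softmax(scores)  # weights are positive and sum to 1
        # Output is the weight-averaged combination of the value vectors
        outputs.append([sum(w * v[j] for w, v in zip(weights, values))
                        for j in range(len(values[0]))])
    return outputs

# The query matches the first key, so the first value dominates the output.
ctx = attention([[1.0, 0.0]], [[1.0, 0.0], [0.0, 1.0]], [[1.0, 0.0], [0.0, 1.0]])
print(ctx)
```

In a transformer this computation runs over every token pair in both directions simultaneously, which is what lets BERT weigh a word's meaning by its full left and right context.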